
    Representation of the verb's argument-structure in the human brain

    BACKGROUND: A verb's argument structure defines the number and relationships of participants needed for a complete event. One-argument (intransitive) verbs require only a subject to form a complete sentence, while two- and three-argument verbs (transitives and ditransitives) normally take direct and indirect objects. Cortical responses to verbs embedded in sentences (correct or containing syntactic violations) indicate that the human brain processes the verb's argument structure. The two experiments of the present study examined whether and how this processing is reflected in distinct spatio-temporal cortical response patterns to isolated verbs and/or verbs presented in minimal context. RESULTS: In experiment 1, the magnetoencephalogram was recorded while 22 native German-speaking adults viewed 130 German verbs, presented one at a time for 150 ms each. Verb-evoked electromagnetic responses at 250–300 ms after stimulus onset, analyzed in source space, were larger in the left middle temporal gyrus for verbs that take only one argument than for two- and three-argument verbs. In experiment 2, the same verbs (presented in a different order) were preceded by a proper name specifying the subject of the verb. This produced additional activation between 350 and 450 ms in or near the left inferior frontal gyrus, with activity larger and peaking earlier for one-argument verbs, which required no further arguments to form a complete sentence. CONCLUSION: Source localization suggests that activation in temporal and frontal regions varies with the degree to which event representations, as part of the verb's semantics, are completed during parsing.

    Word Processing differences between dyslexic and control children

    BACKGROUND: The aim of this study was to investigate brain responses triggered by different word classes in dyslexic and control children. The majority of dyslexic children have difficulty phonologically assembling a word from sublexical parts following grapheme-to-phoneme correspondences. We therefore hypothesised that dyslexic children should differ from controls mainly when processing low-frequency words that are unfamiliar to the reader. METHODS: We presented different word classes (high- and low-frequency words, pseudowords) in a rapid serial visual presentation (RSVP) design and performed wavelet analysis on the evoked activity. RESULTS: Dyslexic children had lower evoked power amplitudes and a higher spectral frequency for low-frequency words than control children. No group differences were found for high-frequency words or pseudowords. Control children had higher evoked power amplitudes and a lower spectral frequency for low-frequency words than for high-frequency words and pseudowords; this pattern was absent in the dyslexic group. CONCLUSION: Dyslexic children differed from control children only in their brain responses to low-frequency words, while showing no modulation of brain activity across the three word types. This may support the hypothesis that dyslexic children are selectively impaired in reading words that require sublexical processing. However, the absence of differences between word types raises the question of whether dyslexic children were able to adequately process words presented in rapid serial fashion. The present results should therefore be interpreted as evidence for a specific sublexical processing deficit only with caution.

    Brain regions essential for improved lexical access in an aged aphasic patient: a case report

    BACKGROUND: The relationship between functional recovery after brain injury and concomitant neuroplastic changes is emphasized in recent research. In the present study we aimed to delineate brain regions essential for language performance in aphasia using functional magnetic resonance imaging with a temporal sparse sampling acquisition procedure, which allows monitoring of overt verbal responses during scanning. CASE PRESENTATION: An 80-year-old patient with chronic aphasia (2 years post-onset) was investigated before and after intensive language training using an overt picture naming task. Differential brain activation for correct word retrieval versus errors was found in the right inferior frontal gyrus. Improved language performance following therapy was mirrored by increased fronto-thalamic activation, while more general measures of attention/concentration and working memory remained stable. Three healthy age-matched control subjects showed no behavioral changes or increased activation when tested repeatedly within the same 2-week interval. CONCLUSION: The results are significant in that the reported changes in brain activation can unequivocally be attributed to the short-term training program and a language domain-specific plasticity process. Moreover, they further challenge the claim of a limited recovery potential in chronic aphasia, even at very old age. Delineation of brain regions essential for performance on a single-case basis might have major implications for treatment using transcranial magnetic stimulation.

    The Nature of Abstract Orthographic Codes: Evidence from Masked Priming and Magnetoencephalography

    What kind of mental objects are letters? Research on letter perception has mainly focussed on the visual properties of letters, showing that orthographic representations are abstract and size/shape invariant. But given that letters are, by definition, mappings between symbols and sounds, what is the role of sound in orthographic representation? We present two experiments suggesting that letters are fundamentally sound-based representations. To examine the role of sound in orthographic representation, we took advantage of the multiple scripts of Japanese. We show two types of evidence that if a Japanese word is presented in a script it never appears in, this presentation immediately activates the ("actual") visual word form of that lexical item. First, equal amounts of masked repetition priming are observed for full repetition and when the prime appears in an atypical script. Second, visual word form frequency affects neuromagnetic measures as early as 100–130 ms, whether the word is presented in its conventional script or in a script it never otherwise appears in. This suggests that Japanese orthographic codes are not only shape-invariant but also script-invariant. The finding that two characters belonging to different writing systems can activate the same form representation suggests that sound identity is what determines orthographic identity: as long as two symbols express the same sound, our minds represent them as part of the same character/letter.

    Neurophysiological evidence for rapid processing of verbal and gestural information in understanding communicative actions

    During everyday social interaction, gestures are a fundamental part of human communication. The communicative-pragmatic role of hand gestures and their interaction with spoken language has been documented at the earliest stage of language development, in which two types of indexical gestures are most prominent: the pointing gesture for directing attention to objects and the give-me gesture for making requests. Here we study, in adult human participants, the neurophysiological signatures of gestural-linguistic acts of communicating the pragmatic intentions of naming and requesting by simultaneously presenting written words and gestures. As early as ~150 ms, brain responses diverged between naming and request actions expressed by word-gesture combination, whereas the same gestures presented in isolation elicited their earliest neurophysiological dissociations significantly later (at ~210 ms). There was an early enhancement of request-evoked brain activity as compared with naming, which was due to sources in the frontocentral cortex, consistent with access to action knowledge in request understanding. In addition, an enhanced N400-like response indicated late semantic integration of gesture-language interaction. The present study demonstrates that word-gesture combinations used to express communicative pragmatic intentions speed up the brain correlates of comprehension processes – compared with gesture-only understanding – thereby calling into question current serial linguistic models that place pragmatic function decoding at the end of a language comprehension cascade. Instead, information about the social-interactive role of communicative acts is processed instantaneously.
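
A divergence latency such as the ~150 ms reported above is typically estimated by testing condition differences at every timepoint and taking the onset of the first sustained run of significant samples. The sketch below is a generic illustration of that logic, not the authors' statistical procedure; the trial counts, time axis, alpha level, and run-length criterion are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def divergence_onset(cond_a, cond_b, times, alpha=0.05, min_run=5):
    """Earliest time at which two condition ERPs diverge.

    cond_a, cond_b : arrays (n_trials, n_times) of single-trial amplitudes
    times          : array (n_times,) of time stamps in seconds
    Runs an independent-samples t-test at every sample and returns the start
    of the first run of `min_run` consecutive significant samples (or None).
    """
    _, p = stats.ttest_ind(cond_a, cond_b, axis=0)
    significant = p < alpha
    run = 0
    for idx, sig in enumerate(significant):
        run = run + 1 if sig else 0
        if run == min_run:
            return times[idx - min_run + 1]
    return None

# Toy usage: two simulated conditions that diverge from 150 ms onward.
rng = np.random.default_rng(0)
times = np.arange(0.0, 0.4, 0.002)            # 2 ms sampling steps
a = rng.normal(0, 1, (40, times.size))
b = rng.normal(0, 1, (40, times.size))
b[:, times >= 0.15] += 2.0                    # strong effect after 150 ms
onset = divergence_onset(a, b, times)
```

Requiring several consecutive significant samples is a simple guard against isolated false positives; real analyses usually use stricter cluster-based permutation corrections.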

    Activation of the Left Inferior Frontal Gyrus in the First 200 ms of Reading: Evidence from Magnetoencephalography (MEG)

    BACKGROUND: It is well established that the left inferior frontal gyrus plays a key role in the cerebral cortical network that supports reading and visual word recognition. Less clear is when in time this contribution begins. We used magnetoencephalography (MEG), which has both good spatial and excellent temporal resolution, to address this question. METHODOLOGY/PRINCIPAL FINDINGS: MEG data were recorded during a passive viewing paradigm, chosen to emphasize the stimulus-driven component of the cortical response, in which right-handed participants were presented with words, consonant strings, and unfamiliar faces at central vision. Time-frequency analyses showed a left-lateralized inferior frontal gyrus (pars opercularis) response to words between 100–250 ms in the beta frequency band that was significantly stronger than the response to consonant strings or faces. The left inferior frontal gyrus response to words peaked at approximately 130 ms. This response was significantly later in time than that of the left middle occipital gyrus, which peaked at approximately 115 ms, but not significantly different from the peak response in the left mid-fusiform gyrus, which peaked at approximately 140 ms, at a location coincident with the fMRI-defined visual word form area (VWFA). Significant responses to words were also detected in other parts of the reading network, including the anterior middle temporal gyrus, the left posterior middle temporal gyrus, the angular and supramarginal gyri, and the left superior temporal gyrus. CONCLUSIONS/SIGNIFICANCE: These findings suggest very early interactions between the vision and language domains during visual word recognition, with speech motor areas being activated at the same time as the orthographic word form is being resolved within the fusiform gyrus. This challenges the conventional view of a temporally serial processing sequence for visual word recognition in which letter forms are initially decoded, interact with their phonological and semantic representations, and only then gain access to a speech code.
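
Peak latencies of a band-limited response, such as the ~130 ms beta-band peak reported above, are commonly estimated by band-pass filtering and taking the Hilbert envelope as instantaneous power. The following is a minimal single-channel sketch of that generic approach, assuming an illustrative 1 kHz sampling rate and a simulated 20 Hz burst; it is not the study's actual time-frequency pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_peak_latency(signal, sfreq, times, band=(13.0, 30.0)):
    """Latency of peak beta-band power in a single-channel evoked response.

    Band-pass filters into the beta range, takes the Hilbert envelope as an
    estimate of instantaneous amplitude, and returns the time of peak power.
    """
    nyq = sfreq / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)          # zero-phase band-pass filter
    envelope = np.abs(hilbert(filtered))       # instantaneous amplitude
    return times[np.argmax(envelope ** 2)]

# Toy usage: a 20 Hz burst centred on 130 ms should peak near 130 ms.
sfreq = 1000.0
times = np.arange(-0.1, 0.4, 1 / sfreq)
burst = np.sin(2 * np.pi * 20 * times) * np.exp(-((times - 0.13) ** 2) / (2 * 0.02 ** 2))
sig = burst + 0.05 * np.random.default_rng(1).normal(size=times.size)
latency = beta_peak_latency(sig, sfreq, times)
```

Zero-phase filtering (`filtfilt`) matters here: a causal filter would delay the envelope and bias the latency estimate late.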

    Identifying object categories from event-related EEG: Toward decoding of conceptual representations

    Multivariate pattern analysis is a technique that allows the decoding of conceptual information, such as the semantic category of a perceived object, from neuroimaging data. Impressive single-trial classification results have been reported in studies that used fMRI. Here, we investigate the possibility of identifying conceptual representations from event-related EEG based on the presentation of an object in different modalities: its spoken name, its visual representation, and its written name. We used Bayesian logistic regression with a multivariate Laplace prior for classification. Marked differences in classification performance were observed across the tested modalities. The highest accuracies (89% correctly classified trials) were attained when classifying object drawings. In the auditory and orthographic modalities, results were lower though still significant for some subjects. The employed classification method allowed precise temporal localization of the features that contributed to the classifier's performance in all three modalities. These findings could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of real-time brain-computer interface applications.
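
The single-trial decoding approach described above can be sketched in a few lines. The study used Bayesian logistic regression with a multivariate Laplace prior; as a simplified stand-in, the sketch below uses scikit-learn's L1-penalised logistic regression (an L1 penalty corresponds to a Laplace prior's point estimate) on simulated trials. All data dimensions and the category effect are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated single-trial data: 100 trials x 64 channel/timepoint features,
# two object categories, with a weak class-dependent signal added.
rng = np.random.default_rng(42)
n_trials, n_features = 100, 64
X = rng.normal(0, 1, (n_trials, n_features))
y = rng.integers(0, 2, n_trials)
X[y == 1, :10] += 1.0            # category effect confined to 10 features

# L1-penalised logistic regression: the sparsity-inducing penalty plays the
# role of the Laplace prior, shrinking uninformative features toward zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)    # 5-fold cross-validated accuracy
```

The sparse solution also supports the feature-localization step mentioned in the abstract: nonzero coefficients mark which channel/timepoint features drive classification.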